73 research outputs found

    Energy storage in highly variable production and utilization systems.

    The recent growth of electricity generation from non-programmable renewable sources, in particular solar and wind, has increased the need to compensate for the resulting fluctuations of the energy fed into the grid, in order to guarantee the stability of the power system. In parallel, the variability of the energy demand of passive users has also increased, driven by ongoing socio-technological changes. This thesis analyses the power and energy services that electrochemical storage systems can provide to grid operators and to end users. The analysis is carried out through three case studies: a dwelling, a funicular traction system and a photovoltaic plant. The fundamental issues related to the sizing and management of an electrical storage system are addressed, together with the benefits obtained on a system already in operation. The photovoltaic plant was the subject of an experimental campaign on a compensation system connected at medium voltage. Preliminary methods to make the production plant programmable were studied using a neural-network model. The interconnection issues between the compensator, the photovoltaic plant and the distribution grid are also addressed.
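
    To make the storage-management idea concrete, the following minimal Python sketch simulates a battery compensator that keeps the power fed into the grid close to a moving-average setpoint while respecting state-of-charge and power limits. It is an illustrative toy, not the control strategy of the thesis; the function name `smooth_with_storage` and all numerical parameters are invented for the example.

```python
# Hypothetical sketch (not from the thesis): a battery compensator that keeps the
# power fed into the grid close to a moving-average setpoint, subject to
# state-of-charge and power limits. All parameter values are illustrative.

def smooth_with_storage(pv_power_kw, capacity_kwh, p_max_kw,
                        dt_h=0.25, soc_init=0.5, window=8):
    """Return (grid_power, soc_trace) for a simple smoothing strategy."""
    soc = soc_init * capacity_kwh                 # stored energy in kWh
    grid, soc_trace, history = [], [], []
    for p_pv in pv_power_kw:
        history.append(p_pv)
        target = sum(history[-window:]) / min(len(history), window)  # moving average
        p_batt = max(-p_max_kw, min(p_max_kw, target - p_pv))        # + = discharge
        p_batt = min(p_batt, soc / dt_h)                             # cannot go below empty
        p_batt = max(p_batt, (soc - capacity_kwh) / dt_h)            # cannot exceed full
        soc -= p_batt * dt_h
        grid.append(p_pv + p_batt)
        soc_trace.append(soc / capacity_kwh)
    return grid, soc_trace

if __name__ == "__main__":
    profile = [0, 5, 40, 10, 55, 20, 60, 5, 0]    # fluctuating PV output, kW
    grid, soc = smooth_with_storage(profile, capacity_kwh=30, p_max_kw=20)
    print([round(p, 1) for p in grid])
    print([round(s, 2) for s in soc])
```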

    A MapReduce solution for associative classification of big data

    Associative classifiers have proven to be very effective in classification problems. Unfortunately, the algorithms used for learning these classifiers are not able to adequately manage big data because of time complexity and memory constraints. To overcome such drawbacks, we propose a distributed association rule-based classification scheme shaped according to the MapReduce programming model. The scheme mines classification association rules (CARs) using a properly enhanced, distributed version of the well-known FP-Growth algorithm. Once CARs have been mined, the proposed scheme performs a distributed rule pruning. The set of surviving CARs is used to classify unlabeled patterns. The memory usage and time complexity for each phase of the learning process are discussed, and the scheme is evaluated on seven real-world big datasets on the Hadoop framework, characterizing its scalability and achievable speedup on small computer clusters. The proposed solution for associative classifiers turns out to be suitable for practically addressing big datasets even with modest hardware support. Comparisons with two state-of-the-art distributed learning algorithms are also discussed in terms of accuracy, model complexity, and computation time.
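
    As a rough illustration of the MapReduce programming model the scheme is built on, the toy Python sketch below counts (itemset, class) co-occurrences with a map phase and a reduce phase, the kind of support statistics from which classification association rules are derived. It is not the distributed FP-Growth variant of the paper; all function names, data and the support threshold are invented for the example.

```python
# Toy, in-memory MapReduce-style pass: count (itemset, class) co-occurrences and
# keep frequent ones as crude rule candidates. Not the paper's algorithm.
from collections import defaultdict
from itertools import combinations

def map_phase(transaction):
    """Emit ((itemset, label), 1) pairs for every 1- and 2-itemset in a record."""
    items, label = transaction
    for k in (1, 2):
        for itemset in combinations(sorted(items), k):
            yield (itemset, label), 1

def reduce_phase(pairs):
    """Sum the counts for every key, as a single reducer would."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return counts

data = [({"a", "b", "c"}, "pos"),
        ({"a", "c"}, "pos"),
        ({"b", "d"}, "neg")]

shuffled = (pair for record in data for pair in map_phase(record))
support_counts = reduce_phase(shuffled)
# Keep only candidate rules (itemset -> label) with support >= 2: a crude stand-in
# for the distributed rule pruning step.
rules = {k: v for k, v in support_counts.items() if v >= 2}
print(rules)
```

    In a real Hadoop deployment the map and reduce functions would run on separate nodes over data partitions; the sketch only mirrors the data flow.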

    Towards priority-awareness in autonomous intelligent systems

    In Autonomous and Intelligent Systems (AIS), the decision-making process can be divided into two parts: (i) the priorities of the requirements are determined at design time; (ii) design selection follows, where alternatives are compared and the preferred alternatives are chosen autonomously by the AIS. Runtime design selection is a trade-off analysis between non-functional requirements (NFRs) that uses optimisation methods, including decision analysis and utility theory. The aim is to select the design option yielding the highest expected utility. A problem with these techniques is that they use a uni-scalar cumulative utility value to represent a combined priority for all the NFRs. However, this uni-scalar value does not give information about the varying impacts of actions, under uncertain environmental contexts, on the satisfaction priorities of individual NFRs. In this paper, we present a novel use of the Multi-Reward Partially Observable Markov Decision Process (MR-POMDP) to support reasoning about separate NFR priorities. We discuss the use of rewards in MR-POMDPs as a way to support AIS with (a) priority-aware decision-making and (b) maintenance of service-level agreements, by autonomously tuning the NFRs' priorities to new contexts based on data gathered at runtime. We evaluate our approach by applying it to a substantial network case study.
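
    The following minimal Python sketch conveys the multi-reward intuition only: each design option keeps a separate expected reward per NFR, and priorities are explicit weights that can be retuned at runtime. It is a one-step stand-in, not an MR-POMDP solver; the option names, NFR names and numbers are assumptions made for the example.

```python
# One-step illustration of priority-aware selection over per-NFR rewards.
# Not an MR-POMDP solver; all values are illustrative.

NFRS = ["performance", "energy", "reliability"]

# Expected per-NFR rewards of each design option.
options = {
    "option_A": {"performance": 0.9, "energy": 0.3, "reliability": 0.7},
    "option_B": {"performance": 0.6, "energy": 0.8, "reliability": 0.6},
}

def select(option_rewards, priorities):
    """Pick the option with the highest priority-weighted reward,
    keeping the per-NFR contributions visible instead of a single scalar."""
    scored = {}
    for name, rewards in option_rewards.items():
        per_nfr = {n: priorities[n] * rewards[n] for n in NFRS}
        scored[name] = (sum(per_nfr.values()), per_nfr)
    best = max(scored, key=lambda n: scored[n][0])
    return best, scored[best]

# Initial priorities, then a runtime retuning that favours energy saving.
print(select(options, {"performance": 0.5, "energy": 0.2, "reliability": 0.3}))
print(select(options, {"performance": 0.2, "energy": 0.6, "reliability": 0.2}))
```

    Retuning the weights flips the selected option, which mirrors the runtime adaptation of NFR priorities the paper argues for, while the per-NFR breakdown remains inspectable.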

    Towards an architecture integrating complex event processing and temporal graphs for service monitoring

    Software is becoming more complex as it needs to deal with an increasing number of aspects in volatile environments. This complexity may cause behaviors that violate the imposed constraints. A goal of runtime service monitoring is to determine whether the service behaves as intended, so that the behavior can potentially be corrected. The monitoring infrastructure may be set up in advance to allow the detection of suspicious situations. However, there may also be unexpected situations to look for, as they only become evident while monitoring, at runtime, the data stream produced by the system. Access to historical data may be key to detecting relevant situations in the monitoring infrastructure. Available monitoring technologies offer different trade-offs, e.g. in the cost and flexibility of storing historical information. For instance, Temporal Graphs (TGs) can store the long-term history of an evolving system for future querying, at the expense of disk space and processing time. In contrast, Complex Event Processing (CEP) can react quickly and efficiently to incoming situations, as long as the appropriate event patterns have been set up in advance. This paper presents an architecture that integrates CEP and TGs for service monitoring over the data stream produced at runtime by a system. The pros and cons of the proposed architecture for extracting and treating the monitored data are analyzed. The approach is applied to the monitoring of Quality of Service (QoS) in a data-management network case study. It is demonstrated how the architecture provides rapid detection of issues, as well as access to historical data about the state of the system, allowing for a comprehensive monitoring solution.
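
    A hedged sketch of the integration idea follows: each incoming event is matched against a simple CEP-style pattern for immediate reaction and is also appended to a time-indexed history that can be queried retrospectively. The class names, the latency threshold and the event format are assumptions made for this example; the paper's architecture relies on dedicated CEP and temporal-graph technologies rather than this toy code.

```python
# Toy pipeline: react to a pattern immediately (CEP-like) while also keeping a
# queryable, time-indexed history (TG-like). Names and thresholds are invented.
from collections import deque
from bisect import bisect_left, bisect_right

class LatencySpikePattern:
    """Fire when 3 consecutive latency samples exceed a threshold."""
    def __init__(self, threshold_ms=200, window=3):
        self.threshold, self.recent = threshold_ms, deque(maxlen=window)
    def feed(self, event):
        self.recent.append(event["latency_ms"])
        return (len(self.recent) == self.recent.maxlen
                and all(v > self.threshold for v in self.recent))

class TemporalHistory:
    """Append-only store of timestamped events, queryable by time range."""
    def __init__(self):
        self.timestamps, self.events = [], []
    def append(self, event):
        self.timestamps.append(event["t"])
        self.events.append(event)
    def between(self, t_start, t_end):
        lo = bisect_left(self.timestamps, t_start)
        hi = bisect_right(self.timestamps, t_end)
        return self.events[lo:hi]

pattern, history = LatencySpikePattern(), TemporalHistory()
stream = [{"t": t, "latency_ms": ms} for t, ms in
          enumerate([90, 110, 250, 280, 300, 120])]
for event in stream:                      # events arrive in timestamp order
    history.append(event)
    if pattern.feed(event):
        # Rapid CEP-style reaction, plus a retrospective query over the history.
        print("QoS violation at t =", event["t"],
              "recent history:", history.between(event["t"] - 3, event["t"]))
```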

    On the characterization and software implementation of general protein lattice models.

    Lattice models of proteins have been widely used as a practical means to computationally investigate general properties of the system. In lattice models any sterically feasible conformation is represented as a self-avoiding walk on a lattice, and residue types are limited in number. So far, only two- or three-dimensional lattices have been used. The inspection of the neighborhood of alpha carbons in the core of real proteins reveals that lattices with higher coordination numbers, possibly in higher dimensional spaces, can also be adopted. In this paper, a new general parametric lattice model for simplified protein conformations is proposed and investigated. It is shown how the supporting software can be consistently designed to let algorithms that operate on protein structures be implemented in a lattice-agnostic way. The necessary theoretical foundations are developed and organically presented, pinpointing the role of the concept of main directions in lattice-agnostic model handling. Subsequently, the model features across dimensions and lattice types are explored in tests performed on benchmark protein sequences, using a Python implementation. Simulations give insights into the use of square and triangular lattices in a range of dimensions. The trend of the potential minimum for sequences of different lengths, varying the lattice dimension, is uncovered. Moreover, an extensive quantitative characterization of the usage of the so-called "move types" is reported for the first time. The proposed general framework for the development of lattice models is simple yet complete, and an object-oriented architecture can be proficiently employed for the supporting software, by designing ad-hoc classes. The proposed framework represents a new general viewpoint that potentially subsumes a number of previously studied solutions. The adoption of the described model encourages looking at protein structure issues from a more general and essential perspective, making computational investigations over simplified models more straightforward as well.
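
    As a minimal illustration of the underlying HP lattice idea, the Python sketch below encodes a conformation as a self-avoiding walk on the 2D square lattice and evaluates the standard HP contact potential. It is restricted to a single lattice type, unlike the paper's lattice-agnostic framework; the sequence and the move list are arbitrary examples, not benchmark inputs.

```python
# Minimal HP-model sketch on the 2D square lattice: a conformation is a
# self-avoiding walk, and the potential counts non-bonded H-H lattice contacts.

SQUARE_2D_MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # main directions of the lattice

def walk_from_moves(moves):
    """Turn a list of move indices into lattice coordinates; None if not self-avoiding."""
    pos, coords = (0, 0), [(0, 0)]
    for m in moves:
        dx, dy = SQUARE_2D_MOVES[m]
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in coords:                               # self-avoidance check
            return None
        coords.append(pos)
    return coords

def hp_potential(sequence, coords):
    """Standard HP potential: -1 per H-H contact between non-consecutive residues."""
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):           # skip chain neighbours
            if sequence[i] == sequence[j] == "H":
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:                        # adjacent on the square lattice
                    energy -= 1
    return energy

sequence = "HPHPPHHPHH"                                 # illustrative sequence, not a benchmark
coords = walk_from_moves([0, 0, 0, 2, 1, 1, 1, 2, 0])   # 9 moves -> 10 residues
if coords is not None:
    print(hp_potential(sequence, coords))
```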

    Potentials found: SA vs. CG (triangular lattices).

    Relative variation of minimum potentials obtained with simulated annealing with respect to the corresponding values from chain growth optimizers, in the case of triangular lattices.

    Two conformations of the same sequence (HI4) in the 2D square and triangular lattices.

    The HP potential is −16 for the former and −31 for the latter.

    Experimental evaluation (by CgOptimizer) of the trend of minimum potential vs. dimension for different sequences on square lattices.

    In the legend, for each sequence, both the length and the number of H residues are specified.

    Comparison of the chain growth algorithm outcomes for square and triangular lattices, referring to the corresponding coordination numbers.

    The tests have been carried out over the HI sequences; the minimum potential value is shown on the left, and the runtime on the right.